With the rapid growth of information generated by smart devices, improving the quality of human life requires a range of computational paradigms, including the Internet of Things (IoT), fog, and cloud computing. Among these three paradigms, fog computing, as an emerging technology, extends cloud-layer services to the edge of the network so that resource allocation occurs close to the end user, reducing processing time and network traffic overhead. Hence, the resource allocation problem, i.e., providing a suitable platform for providers using these computational paradigms, is considered a challenge. In general, resource allocation approaches fall into two categories: auction-based methods (whose goals are increasing service providers' profits and improving user satisfaction and usability) and optimization-based methods (which target energy, cost, network utilization, runtime, and time-delay reduction). In this paper, based on the latest scientific achievements, we provide a comprehensive literature study (CLS) of artificial intelligence methods for optimization-based resource allocation, excluding auction-based methods, across various computing environments such as cloud computing, vehicular fog computing, wireless networks, IoT, vehicular networks, 5G networks, vehicular cloud architectures, machine-to-machine (M2M) communication, train-to-train (T2T) communication networks, and peer-to-peer (P2P) networks. Since deep learning methods are among the most important artificial intelligence techniques for resource allocation problems, this paper also covers deep-learning-based resource allocation approaches in the aforementioned computing environments, including deep reinforcement learning, the Q-learning technique, reinforcement learning, and online learning, as well as classical learning methods such as Bayesian learning, k-means clustering, and Markov decision processes.
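As an illustration of the tabular Q-learning technique mentioned above, the following minimal sketch allocates a toy resource budget to arriving tasks. The environment, reward, and all parameter values are hypothetical and only meant to show the update rule, not any specific method covered by the survey.

```python
import numpy as np

# Hypothetical toy MDP: state = remaining resource units, action = units to allocate.
n_states, n_actions = 11, 4          # states 0..10 resource units, allocate 0..3
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy dynamics: allocating close to the (random) demand yields higher reward."""
    demand = rng.integers(1, n_actions)
    reward = -abs(action - demand)        # penalty for over- or under-allocation
    next_state = max(state - action, 0)
    return next_state, reward, next_state == 0

for episode in range(2000):
    state, done = n_states - 1, False
    while not done:
        # epsilon-greedy action selection
        action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("Greedy allocation per state:", Q.argmax(axis=1))
```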
With Twitter's growth and popularity, a huge number of views are shared by users on a wide range of topics, making the platform a valuable source of information on political, social, and economic issues. This paper investigates English tweets on the Russia-Ukraine war to analyze trends reflecting users' opinions and sentiments regarding the conflict. The tweets' positive and negative sentiments are analyzed using a BERT-based model, and the time series of the frequency of positive and negative tweets is computed for each country. We then propose a method based on the neighborhood average for modeling and clustering the countries' time series. The clustering results provide valuable insight into public opinion on the conflict: among other findings, users from the United States, Canada, the United Kingdom, and most Western European countries share similar views, in contrast to the shared perspective of Eastern European, Scandinavian, Asian, and South American nations.
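A minimal sketch of the kind of pipeline described above: each country's daily positive-tweet frequency is smoothed with a neighborhood (moving) average and the resulting series are clustered. The data, window size, country list, and cluster count are invented for illustration and do not reproduce the paper's exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: daily positive-tweet frequency per country over 90 days.
rng = np.random.default_rng(1)
countries = ["US", "CA", "UK", "DE", "PL", "SE"]
series = rng.random((len(countries), 90))

def neighborhood_average(x, w=7):
    """Smooth a series by averaging each point with its w-sized neighborhood."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

smoothed = np.array([neighborhood_average(s) for s in series])

# Cluster countries by their smoothed sentiment trajectories (k is assumed).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(smoothed)
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```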
Recently, many attempts have been made to construct transformer-based U-shaped architectures, and new methods have been proposed that outperform CNN-based rivals. However, serious problems such as blockiness and cropped edges in predicted masks remain because of the transformers' patch-partitioning operations. In this work, we propose a new U-shaped architecture for medical image segmentation built on the newly introduced focal modulation mechanism. The proposed architecture has asymmetric depths for the encoder and decoder. Owing to the focal module's ability to aggregate local and global features, our model simultaneously benefits from the wide receptive field of transformers and the local viewing of CNNs. This helps the proposed method balance local and global feature usage and outperform one of the most powerful transformer-based U-shaped models, Swin-UNet. We achieved a 1.68% higher DICE score and a 0.89 better HD metric on the Synapse dataset. Also, with extremely limited data, we obtained a 4.25% higher DICE score on the NeoPolyp dataset. Our implementations are available at: https://github.com/givkashi/Focal-UNet
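Below is a highly simplified sketch of a focal modulation block in PyTorch, loosely following the published FocalNet formulation (query projection, hierarchical depth-wise context aggregation, gating, and element-wise modulation). The layer sizes, dilation scheme, and number of focal levels are assumptions; this is not the authors' implementation, which lives in the linked repository.

```python
import torch
import torch.nn as nn

class FocalModulation(nn.Module):
    """Simplified focal modulation: modulate a query with gated multi-level context."""
    def __init__(self, dim, focal_levels=3, kernel=3):
        super().__init__()
        self.f = nn.Linear(dim, 2 * dim + focal_levels + 1)  # query, context, gates
        self.h = nn.Conv2d(dim, dim, 1)                      # context projection
        self.proj = nn.Linear(dim, dim)
        # Hierarchical depth-wise convolutions enlarge the receptive field per level.
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel, padding=(kernel // 2) + i,
                          dilation=i + 1, groups=dim),
                nn.GELU(),
            )
            for i in range(focal_levels)
        )
        self.levels = focal_levels

    def forward(self, x):                      # x: (B, H, W, C)
        q, ctx, gates = torch.split(
            self.f(x), [x.size(-1), x.size(-1), self.levels + 1], dim=-1)
        ctx = ctx.permute(0, 3, 1, 2)          # to (B, C, H, W) for convolutions
        gates = gates.permute(0, 3, 1, 2)
        agg = 0
        for level, layer in enumerate(self.layers):
            ctx = layer(ctx)                   # local context at growing scales
            agg = agg + ctx * gates[:, level : level + 1]
        # Global context from average pooling acts as the last focal level.
        agg = agg + ctx.mean(dim=(2, 3), keepdim=True) * gates[:, self.levels :]
        out = q * self.h(agg).permute(0, 2, 3, 1)   # element-wise modulation
        return self.proj(out)

# Hypothetical usage on one feature map:
y = FocalModulation(dim=64)(torch.randn(1, 32, 32, 64))
print(y.shape)  # torch.Size([1, 32, 32, 64])
```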
The COVID-19 pandemic caused drastic alterations to all aspects of human life, and government regulations enacted in response affected everyone's lifestyle. Studying individuals' sentiment is therefore essential for anticipating the impact of future pandemics. To contribute to this aim, we propose an NLP (Natural Language Processing) model that analyzes open-text answers from a survey in Persian and detects positive and negative feelings among people in Iran. In this study, a DistilBERT transformer model was applied to this task. We deployed three approaches for comparison, and our best model achieved an accuracy of 0.824, precision of 0.824, recall of 0.798, and an F1 score of 0.804.
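For context on how such figures are typically computed, here is a small sketch using scikit-learn's metrics on hypothetical binary predictions. The label arrays below are invented, and macro averaging is an assumption, since the abstract does not state the averaging scheme.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical gold labels and model predictions (1 = positive, 0 = negative feeling).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1       :", f1_score(y_true, y_pred, average="macro"))
```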
The recent breakthroughs in machine learning (ML) and deep learning (DL) have enabled many new capabilities across a wide range of application domains. While most existing machine learning models require large amounts of memory and computing power, efforts have been made to deploy some models on resource-constrained devices as well. Several systems perform inference on the device, but direct training on the device remains a challenge. On-device training, however, is attracting more and more interest because: (1) it enables training models on local data without needing to share data over the cloud, thus enabling privacy-preserving computation by design; (2) models can be refined on devices to provide personalized services and cope with model drift in order to adapt to changes in the real-world environment; and (3) it enables the deployment of models in remote, hardly accessible locations or places without stable internet connectivity. We summarize and analyze state-of-the-art systems research to provide the first survey of on-device training from a systems perspective.
Next-generation wireless networks are required to satisfy diverse services and criteria simultaneously. To address upcoming stringent conditions, the novel Open Radio Access Network (O-RAN) has been developed, featuring a flexible design, disaggregated virtualized and programmable components, and intelligent closed-loop control. In the face of changing circumstances, O-RAN slicing is investigated as a key strategy for ensuring network quality of service (QoS). However, the different network slices must be dynamically controlled to avoid service-level agreement (SLA) violations caused by rapid changes in the environment. Therefore, this paper introduces a novel framework able to manage network slices through intelligently provisioned resources. Given the diverse heterogeneous environments, intelligent machine learning approaches require sufficient exploration to handle the harshest situations in a wireless network and to accelerate convergence. To solve this problem, a new solution is proposed based on evolutionary-based deep reinforcement learning (EDRL) to accelerate and optimize the slice-management learning process in the Radio Access Network (RAN) Intelligent Controller (RIC) modules. To this end, O-RAN slicing is represented as a Markov decision process (MDP), which is then solved optimally for resource allocation to meet service demand using the EDRL approach. In terms of meeting service demand, simulation results show that the proposed approach outperforms the DRL baseline by 62.2%.
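As a rough illustration of the evolutionary ingredient in EDRL, the sketch below perturbs a policy's parameter vector, evaluates each perturbation on a stand-in slicing reward, and moves the parameters toward the better performers (an evolution-strategies step). The linear policy, the toy SLA reward, and all hyperparameters are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slices, dim = 3, 3 * 4              # toy setup: 3 slices, 4 state features each

def reward(theta):
    """Stand-in for an O-RAN slicing episode: score a linear allocation policy."""
    state = rng.random(4)
    alloc = np.maximum(theta.reshape(n_slices, 4) @ state, 0)
    alloc = alloc / (alloc.sum() + 1e-9)          # normalized slice shares
    demand = np.array([0.5, 0.3, 0.2])            # hypothetical SLA demands
    return -np.abs(alloc - demand).sum()          # penalize SLA mismatch

theta = np.zeros(dim)
pop, sigma, lr = 32, 0.1, 0.05
for generation in range(200):
    noise = rng.standard_normal((pop, dim))
    scores = np.array([reward(theta + sigma * n) for n in noise])
    ranks = (scores - scores.mean()) / (scores.std() + 1e-9)   # fitness shaping
    theta += lr / (pop * sigma) * noise.T @ ranks              # ES gradient step

print("final reward estimate:", np.mean([reward(theta) for _ in range(100)]))
```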
Translation quality estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. As an important component in practical applications of MT, this task has received increasing attention. In this paper, we first propose XLMRScore, a simple unsupervised QE method based on BERTScore computed with the XLM-RoBERTa (XLMR) model, while discussing the issues that arise when using this method. Next, we suggest two approaches to mitigate these issues: replacing untranslated words with the unknown token, and cross-lingual alignment of the pre-trained model to represent aligned words closer to each other. We evaluate the proposed method on four low-resource language pairs of the WMT21 QE shared task, as well as a new English-Farsi test dataset introduced in this paper. Experiments show that our method achieves results comparable to the supervised baseline for two zero-shot scenarios, i.e., with less than a 0.01 difference in Pearson correlation, while outperforming unsupervised rivals by more than 8% on average across all low-resource language pairs.
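A minimal sketch of the XLMRScore idea using the open-source bert-score package with an XLM-R checkpoint, scoring MT outputs directly against source sentences (reference-free). The sentences are invented, the layer choice is an assumption, and treating the source as the "reference" argument is our reading of the method rather than code from the paper.

```python
from bert_score import score

# Hypothetical source sentences and their machine translations (no references).
sources = ["The weather is nice today.", "She bought a new book."]
translations = ["هوا امروز خوب است.", "او یک کتاب جدید خرید."]

# Cross-lingual greedy-matching similarity via XLM-RoBERTa token embeddings.
precision, recall, f1 = score(
    translations, sources, model_type="xlm-roberta-base", num_layers=9)
for src, mt, q in zip(sources, translations, f1.tolist()):
    print(f"{q:.3f}  {src} -> {mt}")
```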
Process mining provides various algorithms to analyze process executions based on event data. Process discovery, the most prominent category of process mining techniques, aims to discover process models from event logs; however, it leads to spaghetti models when working with real-life data. Therefore, several clustering techniques have been proposed on top of traditional event logs (i.e., event logs with a single case notion) to reduce the complexity of process models and discover homogeneous subsets of cases. Nevertheless, in real-life processes, especially in the context of business-to-business (B2B) processes, multiple objects are involved in a process. Recently, object-centric event logs (OCELs) have been introduced to capture the information of such processes, and several process discovery techniques have been developed on top of OCELs. Yet the output of these discovery techniques on real OCELs leads to more informative but also more complex models. In this paper, we propose a clustering-based approach to cluster similar objects in OCELs in order to simplify the obtained process models. Using a case study of a real B2B process, we demonstrate that our approach reduces the complexity of the process models and generates coherent subsets of objects that help end users gain insights into the process.
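To make the idea concrete, here is a small sketch that flattens object attributes from an OCEL-style table and groups similar objects with k-means. The attribute set, the encoding, and the use of k-means are illustrative assumptions rather than the paper's exact technique.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical object table extracted from an object-centric event log (OCEL):
objects = pd.DataFrame({
    "object_id": ["o1", "o2", "o3", "o4", "o5", "o6"],
    "type":      ["order", "order", "order", "item", "item", "item"],
    "n_events":  [12, 11, 30, 4, 5, 19],      # events the object participates in
    "value":     [250.0, 240.0, 900.0, 20.0, 25.0, 410.0],
})

# Encode per-object features and normalize them before clustering.
features = StandardScaler().fit_transform(objects[["n_events", "value"]])
objects["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(objects)
# Each cluster's objects can then be used to filter the OCEL and discover a
# simpler process model per coherent subset of objects.
```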
Crowdfunding, the practice of raising funds through the contributions of many people, is one of the most popular research topics in economic theory. Since crowdfunding platforms (CFPs) facilitate the fundraising process by providing multiple features, we should consider their presence and survival in the market. In this study, we investigate the important role of platform features in a customer behavioral choice model. In particular, we propose a multinomial logit model to describe customers' (backers') behavior in a crowdfunding setting. We proceed by discussing the revenue-sharing model in these platforms. To this end, we conclude that an assortment optimization problem could be crucial for maximizing the platforms' revenue. In some cases, we are able to derive a reasonable amount of data and implement two well-known machine learning methods, namely multivariate regression and classification, to predict the optimal assortment that the platform can offer to each arriving customer. We compare the results of these two methods and examine their performance in all cases.
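The multinomial logit ingredient can be made concrete in a few lines: given utilities for the options in an offered assortment, a backer picks each option with softmax probability, with the constant 1 in the denominator representing the no-choice option. The utilities and per-option revenues below are invented for illustration.

```python
import numpy as np

def mnl_choice_probs(utilities):
    """MNL: P(i) = exp(u_i) / (1 + sum_j exp(u_j)); the 1 is the no-choice option."""
    expu = np.exp(utilities)
    return expu / (1.0 + expu.sum())

# Hypothetical utilities of three options shown to an arriving backer.
u = np.array([0.8, 0.1, -0.5])
p = mnl_choice_probs(u)
print("choice probabilities:", p, " no-choice:", 1 - p.sum())

# Expected revenue of the assortment under hypothetical per-option revenues r_i,
# the quantity an assortment optimization would maximize over subsets of options:
r = np.array([10.0, 12.0, 7.0])
print("expected revenue:", (r * p).sum())
```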
In recent years, multi-task learning (MTL) has become a promising topic in machine learning, aiming to enhance the performance of numerous related learning tasks by exploiting beneficial information among them. During the training phase, most existing multi-task learning models focus entirely on the target-task data and ignore the non-target-task data contained in the target tasks. To address this issue, Universum data, which do not belong to any class of the classification problem, can be used as prior knowledge in the training model. This study looks at the challenge of multi-task learning using Universum data to employ non-target-task data, which can lead to improved performance. It proposes a multi-task twin support vector machine with Universum data (UMTSVM) and provides two approaches for its solution. The first approach considers the dual formulation of UMTSVM and solves a quadratic programming problem. The second approach formulates a least-squares version of UMTSVM, referred to as LS-UMTSVM, to further improve generalization performance. The solution of the two primal problems in LS-UMTSVM is reduced to solving just two systems of linear equations, resulting in a remarkably simple and fast approach. Numerical experiments on several popular multi-task datasets and medical datasets demonstrate the efficiency of the proposed methods.
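The key computational claim, that the least-squares version reduces to two linear systems, can be illustrated with a standard least-squares twin-SVM sketch (without the Universum and multi-task terms, which the paper adds on top). The data, regularization constants, and ridge term are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D data for two classes (Universum terms would add extra rows).
A = rng.normal([2, 2], 0.5, size=(40, 2))     # class +1
B = rng.normal([-2, -2], 0.5, size=(40, 2))   # class -1
c1 = c2 = 1.0                                 # assumed regularization constants
eps = 1e-8                                    # small ridge for numerical stability

Aa = np.hstack([A, np.ones((len(A), 1))])     # append bias column
Ba = np.hstack([B, np.ones((len(B), 1))])
I = np.eye(Aa.shape[1])

# Linear system 1: plane close to class A, pushed away from class B.
z1 = np.linalg.solve(Aa.T @ Aa + c1 * Ba.T @ Ba + eps * I,
                     -c1 * Ba.T @ np.ones(len(B)))
# Linear system 2: plane close to class B, pushed away from class A.
z2 = np.linalg.solve(Ba.T @ Ba + c2 * Aa.T @ Aa + eps * I,
                     c2 * Aa.T @ np.ones(len(A)))

def predict(X):
    """Assign each point to the class whose plane it lies closer to."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xa @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xa @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)

X_test = np.vstack([A[:5], B[:5]])
print(predict(X_test))   # expected: five +1s followed by five -1s
```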